The visualization on the previous slide demonstrated the critical divergence in algorithm growth rates. We now formalize this by establishing Big-O notation, $O(f(n))$, as the standard tool for quantifying worst-case efficiency.
The goal of algorithmic analysis is to determine the function $f(n)$ that describes how the required time scales as the input size $n$ grows toward infinity. We always seek the tightest upper bound: the simplest, slowest-growing function that still dominates the actual runtime $t(n)$.
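As a worked illustration (the coefficients here are invented for the example): suppose counting an algorithm's basic operations yields $t(n) = 3n^2 + 5n + 2$. The quadratic term dominates, and a single constant suffices to bound the whole expression:

$$t(n) = 3n^2 + 5n + 2 \le 3n^2 + 5n^2 + 2n^2 = 10n^2 \quad \text{for all } n \ge 1, \qquad \text{so } t(n) \in O(n^2).$$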
- We use Big-O, $O(\dots)$, to describe the worst-case running time of an algorithm.
- The term represents the asymptotic growth rate, ignoring constant coefficients and lower-order terms.
- Efficiency is categorized by complexity class; the goal is always to find the lowest class possible (e.g., $O(1)$ or $O(\log n)$), as the sketch below illustrates.
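A minimal sketch in Python (function names and inputs are invented for illustration) showing how the most common classes typically arise in code:

```python
from typing import List

def get_first(items: List[int]) -> int:
    # O(1): a single indexed access, independent of len(items).
    return items[0]

def binary_search(items: List[int], target: int) -> int:
    # O(log n): the search window halves on every iteration
    # (assumes items is sorted in ascending order).
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def total(items: List[int]) -> int:
    # O(n): one pass over the input.
    s = 0
    for x in items:
        s += x
    return s

def has_duplicate(items: List[int]) -> bool:
    # O(n^2): nested loops compare every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```

Note that `binary_search` achieves $O(\log n)$ only because its input is sorted; on unsorted data a linear scan is unavoidable.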
Analysis Summary & Complexity Table
| Complexity Class | Growth Name | Practical Meaning |
|---|---|---|
| $O(1)$ | Constant | Independent of input size $n$. |
| $O(\log n)$ | Logarithmic | Each step halves the remaining input; doubling $n$ adds only constant time. |
| $O(n)$ | Linear | Time is directly proportional to $n$. |
| $O(n \log n)$ | Log-Linear | Grows only slightly faster than linear; typical of efficient sorting. |
| $O(n^2)$ | Quadratic | Time quadruples when $n$ doubles. |
| $O(2^n)$ | Exponential | Impractical for even moderate $n$. |
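To echo the previous slide's visualization numerically, here is a short sketch (Python; the chosen values of $n$ are arbitrary) that prints an idealized operation count for each class, ignoring constant factors just as Big-O does:

```python
import math

# Idealized operation counts per complexity class; constant
# factors are ignored, just as Big-O notation ignores them.
classes = {
    "O(1)": lambda n: 1,
    "O(log n)": lambda n: math.log2(n),
    "O(n)": lambda n: n,
    "O(n log n)": lambda n: n * math.log2(n),
    "O(n^2)": lambda n: n ** 2,
    "O(2^n)": lambda n: 2 ** n,
}

for n in (10, 20, 30):
    counts = "  ".join(f"{name}={f(n):,.0f}" for name, f in classes.items())
    print(f"n={n:>2}: {counts}")
```

Even at $n = 30$, the exponential row already exceeds a billion operations, while $O(n \log n)$ stays under 150.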